Reinforced Transformer with Cross-Lingual Distillation for Cross-Lingual Aspect Sentiment Classification

Authors

Abstract

Though great progress has been made on the Aspect-Based Sentiment Analysis (ABSA) task, most previous work focuses on English-based ABSA problems, and there are few efforts for other languages, mainly due to the lack of training data. In this paper, we propose an approach for Cross-Lingual Aspect Sentiment Classification (CLASC) that leverages the rich resources of one language (the source language) for aspect sentiment classification in an under-resourced language (the target language). Specifically, we first build a bilingual lexicon from domain-specific data to translate the aspect categories annotated in the source-language corpus, and then translate sentences from the source language to the target language via Machine Translation (MT) tools. However, since MT systems are general-purpose, they unavoidably introduce translation ambiguities that degrade the performance of CLASC. In this context, we propose a novel approach called Reinforced Transformer with Cross-Lingual Distillation (RTCLD), combined with target-sensitive adversarial learning, to minimize the undesirable effects of translation ambiguities in sentence translation. We conduct experiments on different language combinations, treating English as the source language and Chinese, Russian, and Spanish as target languages. The experimental results show that our proposed approach outperforms state-of-the-art methods.
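The abstract outlines a pipeline of label projection via a bilingual lexicon, machine translation of the sentences, and training with cross-lingual distillation plus target-sensitive adversarial learning. As a rough orientation only, the sketch below shows how such a combined objective is commonly assembled in PyTorch; the function and module names, the hyperparameters (alpha, temperature, lambda_adv), and the language discriminator are illustrative assumptions, not the paper's actual RTCLD implementation.

```python
# Illustrative sketch of a CLASC-style training objective: a student classifier
# on machine-translated target-language sentences is trained with (a) cross-entropy
# on projected labels, (b) distillation toward a source-language teacher's soft
# predictions, and (c) an adversarial language-discriminator term.
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer, a common building block for adversarial feature learning."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


def clasc_loss(student_logits, teacher_logits, labels,
               features, lang_discriminator, lang_labels,
               alpha=0.5, temperature=2.0, lambda_adv=0.1):
    # (a) supervised loss on labels projected from the source-language corpus
    ce = F.cross_entropy(student_logits, labels)

    # (b) distillation toward the teacher's softened predictions
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # (c) adversarial term: the discriminator tries to tell source from target
    #     representations, while gradient reversal pushes the encoder the other way
    reversed_feats = GradReverse.apply(features, lambda_adv)
    adv = F.cross_entropy(lang_discriminator(reversed_feats), lang_labels)

    return (1 - alpha) * ce + alpha * kd + adv
```

In a setup like this, the teacher would be a classifier trained on the source-language corpus, while the student sees the machine-translated target-language sentences; the adversarial term is one common way to make the shared representation less sensitive to which language a sentence came from.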


Similar Articles

Aspect-Level Cross-lingual Sentiment Classification with Constrained SMT

Most cross-lingual sentiment classification (CLSC) research so far has been performed at sentence or document level. Aspect-level CLSC, which is more appropriate for many applications, presents the additional difficulty that we consider subsentential opinionated units which have to be mapped across languages. In this paper, we extend the possible cross-lingual sentiment analysis settings to asp...


Cross-lingual Distillation for Text Classification

Cross-lingual text classification (CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, which adapts and extends a framework originally proposed for model compression. Using soft probabilistic predictions for the documents in a label-rich language as the (ind...
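For orientation, the soft-target objective that model distillation typically uses has the following generic form; this is the standard formulation from the model-compression literature, not necessarily the exact loss used in the paper above. Here $p_T$ denotes the teacher's soft predictions over $C$ categories for a source-language document $x_i^{\text{src}}$ and $p_S$ the student model applied to its target-language counterpart $x_i^{\text{tgt}}$:

$$
\mathcal{L}(\theta) \;=\; -\sum_{i=1}^{N} \sum_{c=1}^{C} p_T\!\left(c \mid x_i^{\text{src}}\right)\,\log p_S\!\left(c \mid x_i^{\text{tgt}}; \theta\right)
$$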


Co-Training for Cross-Lingual Sentiment Classification

The lack of Chinese sentiment corpora limits the research progress on Chinese sentiment classification. However, there are many freely available English sentiment corpora on the Web. This paper focuses on the problem of cross-lingual sentiment classification, which leverages an available English corpus for Chinese sentiment classification by using the English corpus as training data. Machine tr...


Active Learning for Cross-Lingual Sentiment Classification

Cross-lingual sentiment classification aims to predict the sentiment orientation of a text in a language (named as the target language) with the help of the resources from another language (named as the source language). However, current cross-lingual performance is normally far away from satisfaction due to the huge difference in linguistic expression and social culture. In this paper, we sugg...


Cross-Lingual Mixture Model for Sentiment Classification

The amount of labeled sentiment data in English is much larger than that in other languages. Such a disproportion arouses interest in cross-lingual sentiment classification, which aims to conduct sentiment classification in the target language (e.g. Chinese) using labeled data in the source language (e.g. English). Most existing work relies on machine translation engines to directly adapt labele...



Journal

Journal title: Electronics

Year: 2021

ISSN: 2079-9292

DOI: https://doi.org/10.3390/electronics10030270